Variance reduction in sample approximations of stochastic programs

Author

  • Matti Koivu
Abstract

This paper studies the use of randomized quasi-Monte Carlo (RQMC) methods in sample approximations of stochastic programs. In high-dimensional numerical integration, RQMC methods often substantially reduce the variance of sample approximations compared to plain Monte Carlo (MC). It therefore seems natural to use RQMC methods in sample approximations of stochastic programs as well. It is shown that RQMC methods produce epi-convergent approximations of the original problem. RQMC and MC methods are compared numerically in five different portfolio management models. In these tests, RQMC methods outperform MC sampling, substantially reducing the sample variance and bias of the optimal values in all of the considered problems.
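
As an illustration of the kind of comparison described in the abstract, the sketch below (not taken from the paper) solves a sample average approximation of a small, made-up stochastic program once with plain MC points and once with randomized QMC points (scrambled Sobol' sequences from scipy.stats.qmc), and compares the spread of the resulting optimal values over independent replications. The toy cost function, dimension, sample size, and replication count are illustrative assumptions.

# Minimal, illustrative comparison of MC and RQMC sample approximations
# (toy problem; not the portfolio models used in the paper).
import numpy as np
from scipy.stats import norm, qmc
from scipy.optimize import minimize_scalar

dim, n_points, n_reps = 8, 256, 30      # assumed sizes; 256 is a power of 2 for Sobol'
rng = np.random.default_rng(0)

def cost(x, xi):
    # Newsvendor-style toy cost: order quantity x, random demand driven by xi ~ N(0, I).
    demand = np.exp(0.2 * xi.sum(axis=1))
    return x + 4.0 * np.maximum(demand - x, 0.0)

def saa_optimal_value(xi):
    # Optimal value of the sample average approximation over the scalar decision x.
    res = minimize_scalar(lambda x: cost(x, xi).mean(),
                          bounds=(0.0, 20.0), method="bounded")
    return res.fun

def mc_points():
    return rng.standard_normal((n_points, dim))

def rqmc_points():
    # Scrambled Sobol' points in [0,1)^dim, mapped to N(0, I) by the inverse normal CDF.
    u = qmc.Sobol(d=dim, scramble=True, seed=rng).random(n_points)
    return norm.ppf(np.clip(u, 1e-12, 1 - 1e-12))

mc_vals = [saa_optimal_value(mc_points()) for _ in range(n_reps)]
rqmc_vals = [saa_optimal_value(rqmc_points()) for _ in range(n_reps)]

print(f"MC   mean {np.mean(mc_vals):.4f}, std {np.std(mc_vals):.4f}")
print(f"RQMC mean {np.mean(rqmc_vals):.4f}, std {np.std(rqmc_vals):.4f}")

Because each replication uses a fresh scrambling, the RQMC optimal values are independent across replications, so their standard deviation can be compared directly with the MC one; in experiments of the kind reported in the paper, the RQMC spread is typically markedly smaller.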

Related articles

On Rates of Convergence for Stochastic Optimization Problems under Non-i.i.d. Sampling

In this paper we discuss the issue of solving stochastic optimization problems by means of sample average approximations. Our focus is on rates of convergence of estimators of optimal solutions and optimal values with respect to the sample size. This is a well-studied problem in the case where the samples are independent and identically distributed (i.e., when standard Monte Carlo is used); here, we stud...

Topics on Monte Carlo Simulation-Based Methods for Stochastic Optimization Problems: Stochastic Constraints and Variance Reduction Techniques

We provide an overview of two select topics in Monte Carlo simulation-based methods for stochastic optimization: problems with stochastic constraints and variance reduction techniques. While Monte Carlo simulation-based methods have been successfully used for stochastic optimization problems with deterministic constraints, there is a growing body of work on their use for problems with stochastic c...

A probability metrics approach for reducing the bias of optimality gap estimators in two-stage stochastic linear programming

Monte Carlo sampling-based estimators of optimality gaps for stochastic programs are known to be biased. When bias is a prominent factor, estimates of optimality gaps tend to be large on average even for high-quality solutions. This diminishes our ability to recognize high-quality solutions. In this paper, we present a method for reducing the bias of the optimality gap estimators for two-stage ...
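
For context, and as a point of comparison rather than the method of that paper: the standard Monte Carlo estimator of the optimality gap of a candidate solution x̂, built from samples ξ_1, ..., ξ_N, is

    G_N(x̂) = (1/N) Σ_{i=1..N} f(x̂, ξ_i)  −  min over x of (1/N) Σ_{i=1..N} f(x, ξ_i).

The first term estimates the objective at x̂ without bias, but the second term underestimates the true optimal value on average (the expectation of a minimum is at most the minimum of the expectations), so G_N overestimates the true gap in expectation. This is the bias that the method summarized above seeks to reduce.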

On Rates of Convergence for Stochastic Optimization Problems Under Non-Independent and Identically Distributed Sampling

In this paper we discuss the issue of solving stochastic optimization problems by means of sample average approximations. Our focus is on rates of convergence of estimators of optimal solutions and optimal values with respect to the sample size. This is a well-studied problem in the case where the samples are independent and identically distributed (i.e., when standard Monte Carlo simulation is used); he...

A model and variance reduction method for computing statistical outputs of stochastic elliptic partial differential equations

We present a model and variance reduction method for the fast and reliable computation of statistical outputs of stochastic elliptic partial differential equations. Our method consists of three main ingredients: (1) the hybridizable discontinuous Galerkin (HDG) discretization of elliptic partial differential equations (PDEs), which allows us to obtain high-order accurate solutions of the govern...

Journal:
  • Math. Program.

Volume 103, Issue -

Pages -

Publication date 2005